3 research outputs found

    MPLS layer 3 VPN

    Final project for the master's degree in Electronics and Telecommunications Engineering. Multiprotocol Label Switching (MPLS) is the principal technology used in service provider networks because it forwards packets quickly. MPLS increases speed, capacity, and service-delivery capability while optimizing transmission resources, and service providers use it to connect remote sites. MPLS offers lower network delay, an efficient forwarding mechanism, and scalable, predictable service performance, which makes it well suited to real-time applications such as voice and video. MPLS can transport any type of data, whether layer 2 traffic such as Frame Relay, Ethernet, or ATM, or layer 3 traffic such as IPv4 and IPv6.
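    To make the fast-forwarding claim concrete, here is a toy Python sketch (not from the thesis) of label-swap forwarding: each label-switching router performs a single exact-match lookup on a fixed-size label instead of a longest-prefix match on the destination IP address. The labels and next-hop names are invented for the example.

```python
# Toy illustration of MPLS label-swap forwarding.
# Labels and next-hop names are invented for the example.

LABEL_FIB = {
    # incoming label: (outgoing label, next hop)
    100: (200, "P1"),    # swap 100 -> 200, forward to core router P1
    200: (None, "PE2"),  # None = pop the label before the egress edge (PHP)
}

def forward(label, payload):
    out_label, next_hop = LABEL_FIB[label]  # single exact-match lookup
    if out_label is None:
        return next_hop, payload             # label popped: plain IP packet
    return next_hop, (out_label, payload)    # label swapped: still MPLS

hop, packet = forward(100, "IPv4 payload")
print(hop, packet)  # P1 (200, 'IPv4 payload')
```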

    Improving Pre-trained Language Models' Generalization

    The reusability of state-of-the-art Pre-trained Language Models (PLMs) is often limited by their generalization problem: performance drops sharply when they are evaluated on examples that differ from the training dataset, known as Out-of-Distribution (OOD) or unseen examples. This limitation arises from PLMs' reliance on spurious correlations, which work well for frequent example types but not for general examples. To address this issue, we propose a training approach called Mask-tuning, which integrates Masked Language Modeling (MLM) training objectives into the fine-tuning process to enhance PLMs' generalization. Comprehensive experiments demonstrate that Mask-tuning surpasses current state-of-the-art techniques and enhances PLMs' generalization on OOD datasets while improving their performance on in-distribution datasets. The findings suggest that Mask-tuning improves the reusability of PLMs on unseen data, making them more practical and effective for real-world applications.
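    A minimal sketch of the core idea, a joint MLM and fine-tuning loss on a shared encoder, is shown below. The model name, loss weight (mlm_weight), and 15% masking rate are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: fine-tune a classifier while adding an auxiliary MLM loss.
import torch
from transformers import (AutoModelForMaskedLM, AutoModelForSequenceClassification,
                          AutoTokenizer, DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm.bert = clf.bert  # share one encoder so both objectives update the same weights

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
optimizer = torch.optim.AdamW(
    list(clf.parameters()) + list(mlm.cls.parameters()), lr=2e-5)
mlm_weight = 1.0  # assumed weight of the auxiliary MLM loss

def train_step(texts, labels):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    # Downstream (fine-tuning) loss on the clean inputs.
    clf_loss = clf(**enc, labels=torch.tensor(labels)).loss
    # MLM loss on randomly masked copies of the same inputs.
    masked = collator([{"input_ids": ids} for ids in enc["input_ids"]])
    mlm_loss = mlm(input_ids=masked["input_ids"],
                   attention_mask=enc["attention_mask"],
                   labels=masked["labels"]).loss
    loss = clf_loss + mlm_weight * mlm_loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

print(train_step(["a great movie", "a dull movie"], [1, 0]))
```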

    Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models

    Recent studies have revealed that widely used Pre-trained Language Models (PLMs) propagate societal biases from their large, unmoderated pre-training corpora. Existing solutions require dedicated debiasing training processes and datasets, which are resource-intensive and costly; furthermore, they hurt the PLMs' performance on downstream tasks. In this study, we propose Gender-tuning, which debiases PLMs through fine-tuning on downstream tasks' datasets. To this end, Gender-tuning integrates Masked Language Modeling (MLM) training objectives into the fine-tuning process. Comprehensive experiments show that Gender-tuning outperforms state-of-the-art baselines on average gender-bias scores in PLMs while improving their performance on downstream tasks, using only the downstream tasks' datasets. Gender-tuning is also a deployable debiasing tool for any PLM that works with the original fine-tuning.
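    The distinctive ingredient relative to ordinary MLM is what gets masked. The sketch below is a guess at the mechanics rather than the authors' exact procedure: it masks gender words so that the MLM objective targets them during fine-tuning. The word list and model name are assumptions; the masked batch would then feed a joint MLM-plus-classification loop like the Mask-tuning sketch above.

```python
# Sketch: mask gender-word tokens (instead of random tokens) for the MLM loss.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Illustrative word list; a real one would be far more complete.
GENDER_WORDS = ["he", "she", "him", "her", "his", "hers", "man", "woman"]
# Keep only words that exist as single tokens in the vocabulary.
gender_ids = {i for i in tokenizer.convert_tokens_to_ids(GENDER_WORDS)
              if i != tokenizer.unk_token_id}

def mask_gender_tokens(texts):
    """Tokenize a batch and replace gender-word tokens with [MASK].

    Returns masked input_ids, the attention mask, and MLM labels in which
    every non-gender position is -100 so the MLM loss ignores it.
    """
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    input_ids = enc["input_ids"]
    hit = torch.zeros_like(input_ids, dtype=torch.bool)
    for gid in gender_ids:
        hit |= input_ids == gid
    labels = torch.full_like(input_ids, -100)
    labels[hit] = input_ids[hit]
    masked = input_ids.clone()
    masked[hit] = tokenizer.mask_token_id
    return masked, enc["attention_mask"], labels

masked, attn, labels = mask_gender_tokens(["she is a brilliant engineer"])
print(tokenizer.decode(masked[0]))  # [CLS] [MASK] is a brilliant engineer [SEP]
```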